360° Situational Awareness

What is 360° Situational Awareness?
360° Situational Awareness refers to the ability to monitor an entire environment in all directions—without blind spots. This is achieved by stitching together video streams from multiple cameras positioned at different angles to create a seamless panoramic view.
With PTZ (Pan-Tilt-Zoom) control, users can navigate the scene interactively—panning across the horizon, tilting the view vertically, and zooming in on areas of interest—just like operating a high-end security camera. This technology is ideal for applications like surveillance, autonomous vehicles, robotics, and industrial monitoring, where full visibility and control are critical.
Real Use Case Scenario
A traffic control center in a busy city needs to monitor everything happening at a large intersection: cars coming from every direction, people crossing the street, buses turning, and even possible accidents. Instead of using separate video feeds from different cameras, they use a system that combines images from multiple wide-angle cameras into a single 360-degree view. This makes it easier to watch everything happening around the intersection in real time without switching between screens.
Using a simple interface, the team can move the view left or right, look up or down, and zoom in to focus on details—like identifying a person or checking a suspicious bag. This gives them full control and awareness of the scene, helping them respond quickly to emergencies, manage crowds, and even enhance monitoring with AI systems for automatic alerts.

How can RDS help you build your 360° Situational Awareness system?
The RidgeRun Development Suite (RDS) enables seamless 360° video experiences by combining multiple video streams into a unified panoramic view. Users can interactively navigate the scene using pan, tilt, and zoom (PTZ) controls, making it ideal for applications like surveillance, robotics, and situational awareness.
This functionality is powered by a set of integrated RidgeRun plugins (see the simplified pipeline sketch after this list), including:
- CUDA Stitcher – for real-time image stitching optimized for NVIDIA Jetson platforms
- GstProjector – to project stitched images onto a navigable viewing plane
- Spherical Panorama PTZ – enabling smooth and intuitive PTZ navigation across spherical views
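Conceptually, these plugins chain into a single GStreamer pipeline. The sketch below is a simplified, hypothetical two-camera layout with all element properties (calibration, homographies, sources, and sinks) left as placeholders; the concrete pipelines are shown later on this page.

gst-launch-1.0 cudastitcher name=stitcher \
    <camera-0 source> ! rrfisheyetoeqr ! stitcher.sink_0 \
    <camera-1 source> ! rrfisheyetoeqr ! stitcher.sink_1 \
    stitcher. ! rrpanoramaptz ! <video sink>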
See RDS in action for 360° Situational Awareness
The easiest way to see our products in action is by running the included demo applications. The Stitching 360 + PTZ demo application is designed to show you how RDS can help you build a 360° Situational Awareness system. In order to run the demo application, follow these steps:
1. Start the RR-Media demo application
rr-media
2. Select Stitcher 360 + PTZ Plugin from the application menu
7. Stitcher 360 + PTZ Plugin

Select plugin [0/1/2/3/4/5/6/7]: 7
3. Start the demo by selecting Run
▶ Stitcher 360 + PTZ Plugin
┌──────┬──────────────────────────────┐
│ 1    │ Performance monitoring (OFF) │
│ 2    │ Run                          │
│ 3    │ Back                         │
│ 4    │ Exit                         │
└──────┴──────────────────────────────┘
A window showing the stitched 360° panorama should appear.

Use the following keys to control camera movement within the scene:
==================================================
Demo Controls:
  Tilt     : Up / Down arrows
  Pan      : Left / Right arrows
  Zoom In  : 'i' key
  Zoom Out : 'o' key
Press these keys to control the camera view.
==================================================
Build your own 360° Situational Awareness system
1. Start with rr-media API
Now that you have seen RDS in action, it's time to build your own application. We recommend starting with the RR-Media API, which allows you to quickly build a proof of concept (PoC) with an easy-to-use Python API.
For this, we will need the following rr-media modules:
- gst.source.file: used to read videos from a file.
- gst.filter.fisheyetoeqr: used to project your videos into equirectangular space.
- jetson.miso.stitcher: used to stitch your videos and generate a 360 degree video.
- gst.filter.ptz: takes your 360 degree video and allows you to navigate through it using PTZ controls.
- jetson.sink.video: allows you to display your video on screen.
We will use the ModuleGraph module to build the following graph:

cam0 → proj0 ─┐
              ├─ stitcher → ptz → video_sink
cam1 → proj1 ─┘
Your Python script should look like this:
from rrmedia.media.core.factory import ModuleFactory
from rrmedia.media.core.graph import ModuleGraph

# Create graph
graph = ModuleGraph()

# Directory containing your videos
video_dir = "path/to/videos/stitcher360ptz"

# Add source files (update with your own videos).
graph.add(ModuleFactory.create(
    "gst.source.file", location=f"{video_dir}/360-s0.mp4", name="cam0"))
graph.add(ModuleFactory.create(
    "gst.source.file", location=f"{video_dir}/360-s1.mp4", name="cam1"))

# Fisheye-to-equirectangular filters (update with your own coefficients).
graph.add(ModuleFactory.create(
    "gst.filter.fisheyetoeqr", radius=750, lens=187, centerx=993,
    centery=762, rotx=0.0, roty=0.0, rotz=-89.6, name="proj0"))
graph.add(ModuleFactory.create(
    "gst.filter.fisheyetoeqr", radius=750, lens=186, centerx=1044,
    centery=776, rotx=0.0, roty=0.0, rotz=88.7, name="proj1"))

# Stitcher (update homography with your own).
graph.add(ModuleFactory.create(
    "jetson.miso.stitcher", name="stitcher",
    homographies_json=f"{video_dir}/homography.json"))

# PTZ filter
graph.add(ModuleFactory.create("gst.filter.ptz", name="ptz"))

# Video sink
graph.add(ModuleFactory.create(
    "jetson.sink.video", name="video_sink", extra_latency=110000000))

# Connect modules
graph.connect("cam0", "proj0")
graph.connect("cam1", "proj1")
graph.connect("proj0", "stitcher", 0)
graph.connect("proj1", "stitcher", 1)
graph.connect("stitcher", "ptz")
graph.connect("ptz", "video_sink")

# Print pipeline
print("Graph pipeline: %s" % graph.dump_launch())

# Start playback
graph.play()

# Start loop (this is a blocking function)
graph.loop()

# To change PTZ properties at runtime, use:
# graph.set_property("ptz", "pan", <NEW_PAN_VALUE>)
# graph.set_property("ptz", "tilt", <NEW_TILT_VALUE>)
# graph.set_property("ptz", "zoom", <NEW_ZOOM_VALUE>)
When you run this script, you should see your video as in the demo, and the pipeline being used will be printed to the console.
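To drive the PTZ controls programmatically while the graph is running, you can call graph.set_property (as shown in the comments above) from a separate thread, since graph.loop() blocks. The snippet below is a minimal sketch: the pan range, step, and timing are illustrative assumptions, and whether calls from another thread need extra synchronization depends on the rr-media implementation.

import threading
import time

def sweep_pan(graph):
    # Illustrative pan sweep; the range and step size are assumptions,
    # adjust them to the values supported by your gst.filter.ptz module.
    for pan in range(-180, 181, 5):
        graph.set_property("ptz", "pan", pan)
        time.sleep(0.1)

# Start the sweep in the background, then enter the blocking loop.
threading.Thread(target=sweep_pan, args=(graph,), daemon=True).start()
graph.loop()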
2. Build or Customize your own pipeline
RR-Media is designed for quick prototyping and testing; however, in certain situations more control is needed and you have to go deeper into the application. In that scenario you have two options:
1. Extend rr-media to fulfill your needs
2. Build your own GStreamer pipeline.
In this section, we will cover option 2. If you want to learn how to extend rr-media, refer to the RR-MEDIA API page.
A good starting point is the GStreamer pipeline obtained while running the rr-media application. You can use it as your base and start customizing according to your needs.
1. Select your input
When working with GStreamer, it's important to define the type of input you're using, whether it's an image, a video file, or a camera. Here are some examples:
For an MP4 video file <MP4_FILE>:
INPUT="filesrc location=<MP4_FILE> ! qtdemux ! h264parse ! decodebin ! queue "
For a camera using NVArgus with a specific sensor ID <Camera ID>:
INPUT="nvarguscamera sensor-id=<Camera ID> ! nvvidconv ! queue "
2. Projection Setup
After defining the input, you can specify the type of projection. For instance, with a fisheye lens, use calibration values previously generated from our calibration tools.
PROJECTOR=" rrfisheyetoeqr crop=false center_x="${CENTERX[0]}" center_y="${CENTERY[0]}" radius="${RADIUS[0]}" rot-x="${ROTX[0]}" rot-y="${ROTY[0]}" rot-z="${ROTZ[0]}" lens="${LENS[0]}" name=proj0 ! queue "
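The snippet above references bash arrays holding the calibration parameters. As a minimal sketch, reusing the example calibration values from the rr-media script earlier, you could define them as follows (replace these with the output of your own calibration):

# Example calibration values taken from the rr-media script above; replace with your own.
CENTERX=(993 1044)
CENTERY=(762 776)
RADIUS=(750 750)
LENS=(187 186)
ROTX=(0.0 0.0)
ROTY=(0.0 0.0)
ROTZ=(-89.6 88.7)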
3. Stitching
Next, configure the stitcher using a homography JSON file to stitch the projections created by the projector:
STITCHING="cudastitcher name=stitcher homography-list=<Homography JSON> sync=true"
4. Output Options
You can choose how you want the output to be handled—whether you want to stream, display, or save the video.
To stream over RTSP using the desired <PORT>:
OUTPUT="nvv4l2h264enc ! h264parse ! video/x-h264, stream-format=avc, mapping=stream1 ! rtspsink service=<PORT> async-handling=true"
To display the output locally:
OUTPUT="DISP="nvvidconv ! queue leaky=downstream ! nveglglessink"
5. Final Pipeline
Finally, you can connect all components using gst-launch or GStreamer Daemon (GSTD). GSTD also allows runtime control of features like the spherical PTZ; here is how you would do it with GSTD:
gstd & gstd-client pipeline_create p1 $STITCHING ! $INPUT_1 ! $PROJECTOR ! stitcher.sink_0 \ $INPUT_2 $PROJECTOR ! stitcher.sink_0 \ .... rrpanoramaptz name=ptz ! $OUTPUT
Then modify the values for pan, tilt, and zoom with GSTD as follows:
gstd-client "element_set p1 ptz tilt <TILT>" gstd-client "element_set p1 ptz pan <PAN>" gstd-client "element_set p1 ptz zoom <ZOOM>"
Extend it Further
You can use GstInterpipe to link the stitched output into other modular pipelines for analytics, AI, or cloud storage.
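As a minimal sketch of this idea, you could replace the display sink with an interpipesink and consume the stitched video from a second GSTD pipeline through an interpipesrc. The pipeline name and the downstream elements below are placeholders for your own analytics, AI, or recording stages.

# Publish the PTZ output on an inter-pipeline connection instead of a display sink.
OUTPUT="interpipesink name=stitched_out sync=false"

# Consume the stitched video in a separate pipeline (placeholder downstream elements).
gstd-client pipeline_create analytics interpipesrc listen-to=stitched_out ! nvvidconv ! fakesink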